
    Output-Sensitive Rendering of Detailed Animated Characters for Crowd Simulation

    High-quality, detailed animated characters are often represented as textured polygonal meshes, but rendering and animating each of these characters is expensive, and this cost has become a major limiting factor in crowd simulation. Since we want to render a huge number of characters in real time, the purpose of this thesis is to study the existing approaches to crowd rendering and derive a novel approach from them. The main limitations we found when using impostors are (1) the large amount of memory needed to store them, which must also be sent to the graphics card, (2) the lack of visual quality in close-up views, and (3) some visibility problems. To overcome these limitations and improve performance, we present a new representation for 3D animated characters based on relief mapping, which supports output-sensitive rendering. The basic idea of our approach is to encode each character as a small collection of textured boxes storing color and depth values. At runtime, each box is animated according to the rigid transformation of its associated bone in the animated skeleton, and a fragment shader recovers the original geometry using an adapted version of relief mapping. Unlike competing output-sensitive approaches, our compact representation recovers high-frequency surface details and reproduces view-motion parallax effects. Furthermore, the proposed approach ensures correct visibility among the different animated parts, and it requires neither predefining the animation sequences nor selecting a subset of discrete views. Finally, a user study demonstrates that our approach allows for a large number of simulated agents with negligible visual artifacts.
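    The fragment-shader search at the heart of this representation can be illustrated in isolation. Below is a minimal CPU-side sketch of relief mapping against a stored depth texture (linear search followed by binary refinement); all names and step counts are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of a relief-mapping intersection search. depth_tex is a
# 2D list of depth values in [0, 1]; entry_p/exit_p are (u, v, ray_depth)
# where the view ray enters and leaves the box.

def relief_map_intersect(depth_tex, entry_p, exit_p,
                         linear_steps=32, binary_steps=6):
    """Return the (u, v, depth) point where the ray first dips below the
    stored surface, or None if it leaves the box without hitting it."""
    def sample(u, v):
        # Nearest-neighbour lookup; a real fragment shader would filter bilinearly.
        h, w = len(depth_tex), len(depth_tex[0])
        return depth_tex[min(int(v * (h - 1)), h - 1)][min(int(u * (w - 1)), w - 1)]

    def lerp(t):
        return tuple(entry_p[i] + (exit_p[i] - entry_p[i]) * t for i in range(3))

    # Linear search: march until the ray depth exceeds the stored depth.
    prev_t = 0.0
    for i in range(1, linear_steps + 1):
        t = i / linear_steps
        u, v, d = lerp(t)
        if d >= sample(u, v):
            break
        prev_t = t
    else:
        return None  # no intersection inside this box

    # Binary refinement between the last point above and first point below.
    lo, hi = prev_t, t
    for _ in range(binary_steps):
        mid = 0.5 * (lo + hi)
        u, v, d = lerp(mid)
        if d >= sample(u, v):
            hi = mid
        else:
            lo = mid
    return lerp(hi)
```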

    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We then address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
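    As a concrete illustration of runtime LoD selection, the sketch below picks a representation from a character's projected size on screen; the thresholds, character height, and LoD names are illustrative assumptions, not values from the survey.

```python
import math

LODS = ["mesh", "simplified mesh", "impostor"]

def select_lod(distance_m, fov_y_rad, screen_h_px,
               char_height_m=1.8, thresholds_px=(250, 80)):
    """Return a LoD name based on the character's projected height in pixels."""
    # Perspective projection: a segment of length h at distance d covers
    # h / (2 d tan(fov/2)) of the viewport height.
    projected_px = screen_h_px * char_height_m / (
        2.0 * distance_m * math.tan(fov_y_rad / 2.0))
    for lod, limit in zip(LODS, thresholds_px):
        if projected_px >= limit:
            return lod
    return LODS[-1]

# e.g. a character 40 m away, 60-degree vertical FOV, 1080-pixel viewport:
print(select_lod(40.0, math.radians(60.0), 1080))  # -> "impostor"
```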

    3D objects reconstruction from frontal images: an example with guitars

    This work deals with the automatic 3D reconstruction of objects from frontal RGB images, aiming at a better understanding of how 3D objects can be reconstructed from RGB images and used in immersive virtual environments. We propose a complete workflow that can be easily adapted to almost any other family of rigid objects; to explain and validate our method, we focus on guitars. First, we detect and segment the guitars present in the image using semantic segmentation methods based on convolutional neural networks. In a second step, we perform the final 3D reconstruction of the guitar by warping the rendered depth maps of a fitted 3D template in 2D image space to match the input silhouette. We validated our method by obtaining guitar reconstructions from real input images and from renders of all guitar models available in the ShapeNet database. Numerical results for different object families were obtained by computing standard mesh evaluation metrics such as Intersection over Union, Chamfer Distance, and the F-score. The results of this study show that our method can automatically generate high-quality 3D object reconstructions from frontal images using various segmentation and 3D reconstruction techniques.
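    Of the evaluation metrics mentioned, Chamfer Distance is easy to state precisely: each cloud's points are matched to their nearest neighbors in the other cloud and the mean distances are summed. A minimal sketch using SciPy's k-d tree (the paper does not say how its metrics were implemented):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3)."""
    d_ab, _ = cKDTree(b).query(a)  # for each point in a, distance to nearest point in b
    d_ba, _ = cKDTree(a).query(b)  # and vice versa
    return d_ab.mean() + d_ba.mean()

# e.g. compare points sampled from a reconstruction and a ground-truth mesh:
rec = np.random.rand(1000, 3)
gt = np.random.rand(1000, 3)
print(chamfer_distance(rec, gt))
```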

    QuickVR: A standard library for virtual embodiment in Unity

    In the last few years the field of Virtual Reality (VR) has experienced significant growth thanks to the introduction of low-cost VR devices to the mass market. However, VR has been used by researchers for many years, since it has proven to be a powerful tool across a vast array of research fields and applications. The key aspect of any VR experience is that it is completely immersive: the virtual world totally surrounds the participant. Some game engines such as Unity already support VR out of the box, and an application can be configured for VR in a matter of minutes. However, there is still no standard, easy-to-use tool for embodying participants in a virtual human character that responds synchronously to their movements with corresponding virtual body movements. In this paper we introduce QuickVR, a library based on Unity which not only offers embodiment in a virtual character, but also provides a series of high-level features that are necessary in any VR application, helping to dramatically reduce production time. Our tool is easy to use for coding novices, but also easily extensible and customizable by more experienced programmers.

    Footstep parameterized motion blending using barycentric coordinates

    This paper presents a real-time animation system for fully embodied virtual humans that satisfies accurate foot-placement constraints for different human walking and running styles. Our method offers a fine balance between motion fidelity and character control, and can efficiently animate over sixty agents in real time (25 FPS) and over a hundred characters at 13 FPS. Given a point cloud of reachable support-foot configurations extracted from the set of available animation clips, we compute its Delaunay triangulation. At runtime, the triangulation is queried to obtain the simplex containing the next footstep, which is used to compute the barycentric blending weights of the animation clips. Our method synthesizes animations that accurately follow footsteps, and a simple IK solver adjusts small offsets, corrects foot orientation, and handles uneven terrain. To incorporate root-velocity fidelity, the method is further extended to include the parametric space of root movement and combine it with footstep-based interpolation. The presented method is evaluated on a variety of test cases, and error measurements are calculated to offer a quantitative analysis of the results achieved.
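    The triangulate-then-blend step maps directly onto standard library calls. The sketch below builds a Delaunay triangulation over hypothetical 2D footstep parameters and returns barycentric blend weights for a queried step; the parameter space, its dimensionality, and the sample values are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical 2D footstep parameters of the available animation clips
# (the paper's space of support-foot configurations may be different).
clip_params = np.array([[0.0, 0.0], [0.6, 0.1], [0.3, 0.5],
                        [0.9, 0.6], [0.1, 0.9]])
tri = Delaunay(clip_params)  # built once, offline

def blend_weights(target_step):
    """Return (clip indices, barycentric weights) for the simplex that
    contains target_step, or None if the step is outside the hull."""
    p = np.asarray(target_step, dtype=float)
    s = int(tri.find_simplex(p))
    if s == -1:
        return None
    T = tri.transform[s]    # affine map into barycentric coordinates
    b = T[:2] @ (p - T[2])  # first two barycentric coordinates
    weights = np.append(b, 1.0 - b.sum())
    return tri.simplices[s], weights

# e.g. blend the clips surrounding a footstep at (0.4, 0.3):
print(blend_weights((0.4, 0.3)))
```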

    Disturbance and plausibility in a virtual rock concert: a pilot study

    We present the methods used to produce and study a first attempt at reconstructing a 1983 live rock concert in virtual reality. An approximately 10-minute performance by the rock band Dire Straits was rendered in virtual reality, using computer vision techniques to extract the appearance and movements of the band, and crowd simulation for the audience. An online pilot study was conducted in which participants experienced the scenario and wrote freely about their experience. The documents produced were analyzed using sentiment analysis, and groups of responses with similar sentiment scores were identified and compared. The results showed that some participants were disturbed not by the band performance but by the accompanying virtual audience that surrounded them. The results point to a profound level of plausibility of the experience, though not in the way the authors expected. The findings add to our understanding of the plausibility of virtual environments. This work is funded by the European Research Council (ERC) Advanced Grant Moments in Time in Immersive Virtual Environments (MoTIVE) #742989.
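    To make the analysis step concrete, here is a minimal sketch of scoring free-text responses with NLTK's VADER sentiment analyzer; the paper does not state which sentiment tool was used, and the example responses are invented.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

# Invented example responses, standing in for the participants' free text.
responses = [
    "The band was amazing, I felt like I was really at the concert.",
    "The virtual audience around me kept staring and it was unsettling.",
]
for text in responses:
    # 'compound' aggregates the lexicon scores into a value in [-1, 1].
    score = sia.polarity_scores(text)["compound"]
    print(f"{score:+.3f}  {text}")
```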

    Evaluating participant responses to a virtual reality experience using reinforcement learning

    Virtual reality applications depend on multiple factors, for example the quality of rendering, responsiveness, and interfaces. To evaluate the relative contributions of different factors to the quality of experience, post-exposure questionnaires are typically used. Questionnaires are problematic because the questions can frame how participants think about their experience, and they cannot easily take account of non-additivity among the various factors. Traditional experimental design can incorporate non-additivity, but beyond two factors it requires a large factorial design table. Here, we extend a previous method by introducing a reinforcement learning (RL) agent that proposes possible changes to factor levels during the exposure and requires the participant to either accept these or not. Eventually, the RL agent converges on a policy where no further proposed changes are accepted. An experiment was carried out with 20 participants in which four binary factors were considered. A consistent configuration of factors emerged: participants preferred a teleportation technique for navigation (compared to walking-in-place), a full-body representation (rather than hands only), responsive virtual human characters (compared to being ignored), and realistic rather than cartoon rendering. We propose this new method for evaluating participant choices and discuss various extensions. This research is supported by the European Research Council Advanced grant Moments in Time in Immersive Virtual Environments (MoTIVE), grant no. 742989; all authors were funded by this grant except for G.S., who is supported by ‘la Caixa’ Foundation (ID 100010434) with Fellowship code no. LCF/BQ/DR19/11740007.
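    The accept/reject loop can be sketched abstractly. The following is a greatly simplified stand-in for the paper's RL agent, not its actual algorithm: it proposes flipping one binary factor at a time and stops once every factor's recent proposals have been rejected. The factor names, proposal policy, and stopping rule are all assumptions for illustration.

```python
import random

FACTORS = ["navigation", "body", "responsiveness", "rendering"]  # 4 binary factors

def run_session(participant_accepts, patience=2):
    """Propose flipping one factor level at a time; stop once the participant
    has rejected every factor's flip `patience` times in a row."""
    config = dict.fromkeys(FACTORS, 0)  # start at level 0 for all factors
    rejections = dict.fromkeys(FACTORS, 0)
    while any(r < patience for r in rejections.values()):
        f = random.choice([f for f in FACTORS if rejections[f] < patience])
        proposal = dict(config, **{f: 1 - config[f]})
        if participant_accepts(config, proposal):
            config = proposal
            rejections = dict.fromkeys(FACTORS, 0)  # new state: re-test all flips
        else:
            rejections[f] += 1
    return config  # converged configuration, read as the revealed preference

# Toy participant who simply prefers level 1 on every factor:
prefers_ones = lambda cur, prop: sum(prop.values()) > sum(cur.values())
print(run_session(prefers_ones))  # -> all factors at level 1
```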

    A separate reality: an update on Place Illusion and Plausibility in virtual reality

    We review the concept of presence in virtual reality, normally thought of as the sense of “being there” in the virtual world. We argued in a 2009 paper that presence consists of two orthogonal illusions, which we refer to as Place Illusion (PI, the illusion of being in the place depicted by the VR) and Plausibility (Psi, the illusion that the virtual situations and events are really happening). Both hold with the proviso that the participant in the virtual reality knows for sure that these are illusions. Presence (PI and Psi), together with the illusion of ownership over the virtual body that self-represents the participant, make up the three key illusions of virtual reality. Copresence, togetherness with others in the virtual world, can arise through interaction between remotely located participants in the same shared virtual environment, or between participants and virtual humans. We then review several different methods of measuring presence: questionnaires, physiological and behavioural measures, breaks in presence, and a psychophysics method based on transitions between different system configurations. Presence is not the only way to assess the responses of people to virtual reality experiences, and we present methods that rely solely on participant preferences, including the use of sentiment analysis, which allows participants to express their experience in their own words rather than being required to adopt the terminology and concepts of researchers. We discuss several open questions and controversies in this field, providing an update to the 2009 paper, in particular with respect to models of Plausibility. We argue that Plausibility is the most interesting and complex illusion to understand and is worthy of significantly more research. Regarding measurement, we conclude that the ideal method would be a combination of a psychophysical method and qualitative methods including sentiment analysis.

    Effective user studies in computer graphics

    User studies are a useful tool for researchers, allowing them to collect data on how users perceive, interact with, and process different types of sensory information. If planned in advance, user experiments can be leveraged at every stage of a research project, from early design, prototyping, and feature exploration, through validation and data collection for model training, to applied proofs of concept. User studies can provide the researcher with different types of information depending on the chosen methodology: user performance metrics, surveys and interviews, field studies, physiological data, etc. Considering human perception and other cognitive processes is particularly important in computer graphics, where most research produces outputs whose ultimate purpose is to be seen or perceived by a human. Being able to measure objectively and systematically how the information we generate is integrated into the representational space humans create to situate themselves in the world gives researchers more information with which to design optimal algorithms, tools, and techniques. In this tutorial we give an overview of good practices for user studies in computer graphics, with a particular focus on virtual reality use cases. We cover the basics of how to design, carry out, and analyze good user studies, as well as the particularities to be taken into account in immersive environments.
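    As one small example of the analysis stage, paired ordinal ratings from a within-subjects study are often compared with a non-parametric test. The sketch below uses SciPy's Wilcoxon signed-rank test on made-up Likert data; the data and the choice of test are illustrative, not prescribed by the tutorial.

```python
from scipy import stats

# Made-up paired Likert ratings (same 10 participants, two conditions).
condition_a = [5, 6, 4, 7, 5, 6, 6, 5, 7, 4]
condition_b = [3, 5, 4, 5, 4, 5, 4, 3, 6, 4]

# Ordinal, non-normal data is common in user studies, so a non-parametric
# paired test is often preferred over a paired t-test.
stat, p = stats.wilcoxon(condition_a, condition_b)
print(f"Wilcoxon W = {stat}, p = {p:.4f}")
```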